A Unified Approach to Error Bounds for Structured Convex Optimization Problems
Authors
Abstract
Error bounds, which refer to inequalities that bound the distance of vectors in a test set to a given set by a residual function, have proven to be extremely useful in analyzing the convergence rates of a host of iterative methods for solving optimization problems. In this paper, we present a new framework for establishing error bounds for a class of structured convex optimization problems, in which the objective function is the sum of a smooth convex function and a general closed proper convex function. Such a class encapsulates not only fairly general constrained minimization problems but also various regularized loss minimization formulations in machine learning, signal processing, and statistics. Using our framework, we show that a number of existing error bound results can be recovered in a unified and transparent manner. To further demonstrate the power of our framework, we apply it to a class of nuclear-norm regularized loss minimization problems and establish a new error bound for this class under a strict complementarity-type regularity condition. We then complement this result by constructing an example to show that the said error bound could fail to hold without the regularity condition. We believe that our approach will find further applications in the study of error bounds for structured convex optimization problems.
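To fix ideas, the composite model and a Luo–Tseng-type error bound for it can be written as follows. This is an illustrative sketch: the proximal residual r and the constants κ, ε, ζ are generic choices for this class of problems, not necessarily the exact quantities used in the paper.

```latex
\[
  \min_{x \in \mathbb{R}^n} \; F(x) := f(x) + g(x)
  \qquad \text{($f$ smooth convex, $g$ closed proper convex)}
\]
\[
  r(x) := \bigl\| x - \operatorname{prox}_{g}\bigl( x - \nabla f(x) \bigr) \bigr\|
  \qquad \text{(proximal residual; $r(x) = 0$ iff $x$ is optimal)}
\]
\[
  \operatorname{dist}\bigl(x, \mathcal{X}^{*}\bigr) \;\le\; \kappa \, r(x)
  \quad \text{whenever } F(x) \le \zeta \text{ and } r(x) \le \varepsilon,
\]
where $\mathcal{X}^{*}$ denotes the set of optimal solutions and $\kappa, \varepsilon > 0$ may depend on the level parameter $\zeta$.
```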
Similar resources
Bundle Methods for Machine Learning
We present a globally convergent method for regularized risk minimization problems. Our method applies to Support Vector estimation, regression, Gaussian Processes, and any other regularized risk minimization setting that leads to a convex optimization problem. SVMPerf can be shown to be a special case of our approach. In addition to the unified framework, we present tight convergence bounds, w...
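As a rough illustration of the cutting-plane idea underlying such bundle methods, the following sketch minimizes an L2-regularized hinge-loss risk by accumulating subgradient cuts. The toy data, the plain Kelley-style iteration without a proximal term, and the use of a general-purpose SLSQP solver for the subproblem are illustrative assumptions, not details of the cited method.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data for L2-regularized hinge-loss risk minimization (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=200))
lam = 0.1  # regularization weight

def risk(w):
    """Average hinge loss and one subgradient at w."""
    margins = 1.0 - y * (X @ w)
    loss = np.mean(np.maximum(margins, 0.0))
    active = margins > 0
    grad = -(X[active].T @ y[active]) / len(y)
    return loss, grad

# Cutting-plane loop: build a piecewise-linear lower model of the risk from
# subgradient cuts, then minimize (regularizer + model) via an epigraph
# reformulation solved with a general-purpose constrained solver.
w = np.zeros(5)
cuts = []  # each cut (a, b) encodes the lower bound a @ w + b <= risk(w)
for _ in range(25):
    loss, grad = risk(w)
    cuts.append((grad, loss - grad @ w))

    def objective(z):
        w_, xi = z[:-1], z[-1]
        return 0.5 * lam * w_ @ w_ + xi

    constraints = [
        {"type": "ineq", "fun": lambda z, a=a, b=b: z[-1] - (a @ z[:-1] + b)}
        for a, b in cuts
    ]
    z0 = np.append(w, loss)  # feasible starting point for the subproblem
    w = minimize(objective, z0, method="SLSQP", constraints=constraints).x[:-1]

print("final regularized risk:", 0.5 * lam * w @ w + risk(w)[0])
```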
Error bounds for rank constrained optimization problems
This paper is concerned with the rank constrained optimization problem whose feasible set is the intersection of the rank constraint set R = { X ∈ 𝕏 | rank(X) ≤ κ } and a closed convex set Ω, where 𝕏 denotes the ambient matrix space. We establish local (global) Lipschitzian-type error bounds for estimating the distance from any X ∈ Ω (X ∈ 𝕏) to the feasible set and the solution set, respectively, under the calmness of a multifuncti...
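Schematically, a global Lipschitzian error bound of the kind described might take the following form under a suitable regularity condition, such as the calmness assumption mentioned above; the exact residual and constants in the cited paper may differ.

```latex
\[
  \operatorname{dist}\bigl(X, \, \mathcal{R} \cap \Omega\bigr)
  \;\le\;
  c \,\Bigl( \operatorname{dist}(X, \mathcal{R}) + \operatorname{dist}(X, \Omega) \Bigr)
  \qquad \text{for all } X \in \mathbb{X},
\]
\[
  \text{where }\;
  \operatorname{dist}(X, \mathcal{R})
  = \Bigl( \sum_{i > \kappa} \sigma_i(X)^2 \Bigr)^{1/2}
  \;\text{ in the Frobenius norm, by the Eckart--Young theorem.}
\]
```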
Convergence rate analysis and error bounds for projection algorithms in convex feasibility problems
Amir Beck and Marc Teboulle (2003). Convergence rate analysis and error bounds for projection algorithms in convex feasibility problems. Optimization Methods and Software, 18(4), 377–394. DOI: 10.1080/10556780310001604977.
Linear Time Varying MPC Based Path Planning of an Autonomous Vehicle via Convex Optimization
In this paper, a new method is introduced for path planning of an autonomous vehicle. In this method, the environment is assumed to be cluttered and to contain several sources of uncertainty. Thus, the state of each detected object must be estimated using an optimal filter. To do so, the state distribution is assumed to be Gaussian, and the state vector is estimated by a Kalman filter at each time step. The estimation...
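A minimal predict/update cycle of the linear Kalman filter referred to above might look as follows; the constant-velocity model, noise covariances, and measurements are illustrative placeholders, not the vehicle/obstacle model of the cited paper.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : prior state estimate and covariance
    z    : current measurement
    F, H : state-transition and measurement matrices
    Q, R : process- and measurement-noise covariances
    """
    # Predict: propagate the state and covariance through the dynamics.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Update: correct the prediction with the measurement.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Example: constant-velocity model in one dimension (position, velocity).
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])               # only the position is measured
Q = 1e-3 * np.eye(2)
R = np.array([[0.05]])

x, P = np.zeros(2), np.eye(2)
for z in [0.11, 0.19, 0.32, 0.41]:        # illustrative position measurements
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print("estimated position/velocity:", x)
```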
A Method for Solving Convex Quadratic Programming Problems Based on Differential-algebraic equations
In this paper, a new model based on differential-algebraic equations (DAEs) for solving convex quadratic programming (CQP) problems is proposed. It is proved that the new approach is guaranteed to generate optimal solutions for this class of optimization problems. This paper also shows that the conventional interior point methods for solving CQP problems can be viewed as a special case of the n...
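For intuition, one simple way to cast an equality-constrained CQP as a DAE is through a constrained gradient flow of its KKT system; this is a schematic construction for illustration only and is not necessarily the model proposed in the cited paper.

```latex
\[
  \text{CQP:}\quad \min_{x} \; \tfrac{1}{2} x^{\top} Q x + c^{\top} x
  \quad \text{s.t.}\quad A x = b, \qquad Q \succeq 0,
\]
\[
  \text{DAE:}\qquad
  \dot{x}(t) = -\bigl( Q x(t) + c + A^{\top} \lambda(t) \bigr),
  \qquad
  0 = A x(t) - b .
\]
% Any equilibrium (x*, \lambda*) of this system satisfies the KKT conditions
% Q x* + c + A^T \lambda* = 0 and A x* = b, i.e., x* solves the CQP.
```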
Journal title: Math. Program.
Volume: 165
Issue: –
Pages: –
Publication date: 2017